19 research outputs found

    Heap Reference Analysis Using Access Graphs

    Despite significant progress in the theory and practice of program analysis, analysing properties of heap data has not reached the same level of maturity as the analysis of static and stack data. The spatial and temporal structure of stack and static data is well understood, while that of heap data seems arbitrary and is unbounded. We devise bounded representations which summarize properties of the heap data. This summarization is based on the structure of the program which manipulates the heap. The resulting summary representations are certain kinds of graphs called access graphs. The boundedness of these representations and the monotonicity of the operations used to manipulate them make it possible to compute them through data flow analysis. An important application which benefits from heap reference analysis is garbage collection, where currently liveness is conservatively approximated by reachability from program variables. As a consequence, current garbage collectors leave a lot of garbage uncollected, a fact which has been confirmed by several empirical studies. We propose the first ever end-to-end static analysis to distinguish live objects from reachable objects. We use this information to make dead objects unreachable by modifying the program. This application is interesting because it requires discovering data flow information representing complex semantics. In particular, we discover four properties of heap data: liveness, aliasing, availability, and anticipability. Together, they cover all combinations of directions of analysis (i.e. forward and backward) and confluence of information (i.e. union and intersection). Our analysis can also be used for plugging memory leaks in C/C++ programs.
    Comment: Accepted for publication by ACM TOPLAS. This version incorporates the referees' comments.
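    The four properties above span both analysis directions and both confluences. The backward/union quadrant (liveness) can be sketched as a classic round-robin fixed-point iteration. This is a toy illustration over scalar variables, not the paper's access-graph machinery; the CFG, node names, and variable names are all invented.

    ```python
    # Each node maps to (use set, def set, successor list).
    # Toy program: n1: b = f(a); n2: c = g(b), branch; n3: use(c); n4: use(b).
    CFG = {
        "n1": ({"a"}, {"b"}, ["n2"]),
        "n2": ({"b"}, {"c"}, ["n3", "n4"]),
        "n3": ({"c"}, set(), ["n4"]),
        "n4": ({"b"}, set(), []),
    }

    def liveness(cfg):
        """Backward analysis with union confluence, iterated to a fixed point:
        OUT[n] = union of IN over successors; IN[n] = use[n] | (OUT[n] - def[n])."""
        live_in = {n: set() for n in cfg}
        changed = True
        while changed:
            changed = False
            for n, (use, defs, succs) in cfg.items():
                out = set().union(*(live_in[s] for s in succs)) if succs else set()
                new_in = use | (out - defs)
                if new_in != live_in[n]:
                    live_in[n] = new_in
                    changed = True
        return live_in
    ```

    Monotonicity of the transfer functions over a finite lattice is what guarantees this iteration terminates, which is the same argument the abstract makes for the boundedness of access graphs.
    
    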

    Trans_Proc: A Processor to Implement the Linear Transformations on the Image and Signal Processing and Its Future Scope

    No full text
    We present Transproc, a reconfigurable generic processor which can execute operations related to linear transformations such as the FFT, FDCT, and FDWT. A graph-theoretic lemma is used to establish the applicability of such a processor to the flow-graph-related parallel operations found in these linear transformations. The architecture-level design and the processing-element-level design are presented, and a primitive instruction set, along with the control signals implementing it, is proposed. A detailed simulation validating the correctness of the PE-level and architecture-level data calculation and routing operations is carried out using Xilinx Vivado WebPack. Results related to size, power, and timing requirements are presented.
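    The flow-graph structure these transforms share is easiest to see in the radix-2 FFT, whose butterfly stages are precisely the kind of parallel flow-graph operations such a processor would map onto its processing elements. A minimal software sketch (plain Python, not the proposed hardware or its instruction set):

    ```python
    import cmath

    def fft(x):
        """Radix-2 Cooley-Tukey FFT for power-of-two lengths. Each recursion
        level corresponds to one stage of butterflies in the FFT flow graph."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])
        odd = fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
            out[k] = even[k] + w            # butterfly: upper output
            out[k + n // 2] = even[k] - w   # butterfly: lower output
        return out
    ```

    All n/2 butterflies within a stage are independent, which is the parallelism a flow-graph-oriented processor exploits.
    
    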

    A PVS based framework for validating compiler optimizations

    No full text
    Abstract: An optimization can be specified as sequential compositions of predefined transformation primitives. For each primitive, we can define soundness conditions which guarantee that the transformation is semantics preserving. An optimization of a program preserves semantics if all applications of the primitives in the optimization satisfy their respective soundness conditions on the versions of the input program on which they are applied. This scheme does not directly check semantic equivalence of the input and the optimized programs and is therefore amenable to automation. Automating this scheme, however, requires a trusted framework for simulating transformation primitives and checking their soundness conditions. In this paper, we present the design of such a framework based on PVS. We have used it for specifying and validating several optimizations, viz. common subexpression elimination, optimal code placement, lazy code motion, loop invariant code motion, full and partial dead code elimination, etc.
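    The scheme can be illustrated with a single primitive and its soundness condition. The sketch below, in Python rather than PVS, pairs a hypothetical "delete assignment" primitive with the condition that the assigned variable is dead at that point; every name here is illustrative and none of it comes from the paper's formalization.

    ```python
    def live_out_sets(prog, exit_live):
        """Backward liveness over straight-line code. prog is a list of
        assignments (lhs, set_of_rhs_vars); returns the live-out set per statement."""
        live = set(exit_live)
        out = [None] * len(prog)
        for i in range(len(prog) - 1, -1, -1):
            out[i] = set(live)
            lhs, rhs = prog[i]
            live = (live - {lhs}) | rhs
        return out

    def is_sound_delete(prog, i, exit_live):
        """Soundness condition for the 'delete assignment' primitive: the
        assigned variable is not live after statement i (a side-effect-free
        right-hand side is assumed)."""
        return prog[i][0] not in live_out_sets(prog, exit_live)[i]

    def delete_assign(prog, i):
        """The transformation primitive itself: remove statement i."""
        return prog[:i] + prog[i + 1:]
    ```

    The point of the scheme is visible even here: the checker validates the condition on the program version the primitive is applied to, and never compares the input and optimized programs directly.
    
    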

    Data flow analysis: theory and practice

    No full text
    Data flow analysis is used to discover information for a wide variety of useful applications, ranging from compiler optimizations to software engineering and verification. Modern compilers apply it to produce performance-maximizing code, and software engineers use it to re-engineer or reverse engineer programs and to verify the integrity of their programs. Unlike most comparable books, many of which are limited to bit vector frameworks and classical constant propagation, Data Flow Analysis: Theory and Practice offers comprehensive coverage.
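    As a taste of the bit vector frameworks the blurb mentions, here is the textbook forward/union instance, reaching definitions, iterated to a fixed point. This is a generic sketch over a small invented CFG, not material from the book.

    ```python
    def reaching_definitions(cfg):
        """Forward analysis with union confluence, the dual of liveness:
        IN[n] = union of OUT over predecessors; OUT[n] = gen[n] | (IN[n] - kill[n])."""
        in_ = {n: set() for n in cfg}
        out = {n: set() for n in cfg}
        changed = True
        while changed:
            changed = False
            for n, (gen, kill, preds) in cfg.items():
                in_[n] = set().union(*(out[p] for p in preds)) if preds else set()
                new_out = gen | (in_[n] - kill)
                if new_out != out[n]:
                    out[n] = new_out
                    changed = True
        return in_

    # d1: x = ... (in n1), d2: x = ... (in n3); n3 loops back to n2,
    # so both definitions of x reach n4.
    CFG = {
        "n1": ({"d1"}, {"d2"}, []),
        "n2": (set(), set(), ["n1", "n3"]),
        "n3": ({"d2"}, {"d1"}, ["n2"]),
        "n4": (set(), set(), ["n2"]),
    }
    ```

    In a bit vector framework each definition becomes one bit, so the set unions and differences above compile down to word-parallel boolean operations.
    
    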